59 research outputs found

    Graphical update caching for mobile thin clients

    Transfer Learning with Binary Neural Networks

    Previous work has shown that it is possible to train deep neural networks with low-precision weights and activations. In the extreme case it is even possible to constrain the network to binary values. The costly floating point multiplications are then reduced to fast logical operations. High-end smartphones such as Google's Pixel 2 and Apple's iPhone X are already equipped with specialised hardware for image processing, and it is very likely that other future consumer hardware will also have dedicated accelerators for deep neural networks. Binary neural networks are attractive in this case because the logical operations are very fast and efficient when implemented in hardware. We propose a transfer learning based architecture where we first train a binary network on ImageNet and then retrain part of the network for different tasks while keeping most of the network fixed. The fixed binary part could be implemented in a hardware accelerator while the last layers of the network are evaluated in software. We show that a single binary neural network trained on the ImageNet dataset can indeed be used as a feature extractor for other datasets. Comment: Machine Learning on the Phone and other Consumer Devices, NIPS 2017 Workshop
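
    As a rough illustration of this architecture, the sketch below (assuming PyTorch and toy layer sizes, not the authors' actual ImageNet model) freezes a sign-binarized feature extractor and trains only a small floating-point head on top of it.

        # Minimal sketch (assumed PyTorch, toy sizes): a frozen "binary" feature
        # extractor plus a small trainable head, mirroring the idea of keeping the
        # binarized backbone fixed (e.g. in hardware) and retraining the last layers.
        import torch
        import torch.nn as nn

        class BinaryLinear(nn.Linear):
            """Linear layer whose weights are binarized with sign() in the forward pass."""
            def forward(self, x):
                return nn.functional.linear(x, torch.sign(self.weight), self.bias)

        # Hypothetical backbone standing in for the binary network pretrained on ImageNet.
        backbone = nn.Sequential(
            BinaryLinear(784, 512), nn.ReLU(),
            BinaryLinear(512, 256), nn.ReLU(),
        )
        for p in backbone.parameters():      # freeze: this part could live in an accelerator
            p.requires_grad = False

        head = nn.Linear(256, 10)            # task-specific layers, retrained in software

        optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        x = torch.randn(32, 784)             # placeholder batch; a real task would load its own data
        y = torch.randint(0, 10, (32,))
        optimizer.zero_grad()
        loss = criterion(head(backbone(x)), y)
        loss.backward()
        optimizer.step()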

    Resource-constrained classification using a cascade of neural network layers

    Deep neural networks are the state-of-the-art technique for a wide variety of classification problems. Although deeper networks are able to make more accurate classifications, the value brought by an additional hidden layer diminishes rapidly. Even shallow networks are able to achieve relatively good results on various classification problems; only for a small subset of the samples do the deeper layers make a significant difference. We describe an architecture in which only the samples that cannot be classified with sufficient confidence by a shallow network have to be processed by the deeper layers. Instead of training a network with one output layer at the end, we train several output layers, one for each hidden layer. When an output layer is sufficiently confident in its result, we stop propagating at this layer and the deeper layers need not be evaluated. The choice of a confidence threshold allows us to trade off accuracy against speed. Applied in the Internet-of-Things (IoT) context, this approach makes it possible to distribute the layers of a neural network between low-powered devices and powerful servers in the cloud: the remote layers are only needed when the local layers are unable to make an accurate classification. Such an architecture adds the intelligence of a deep neural network to resource-constrained devices such as sensor nodes and other IoT devices. We evaluated our approach on the MNIST and CIFAR10 datasets. On the MNIST dataset we retain the same accuracy at half the computational cost; on the more difficult CIFAR10 dataset we obtained a relative speed-up of 33% at a marginal increase in error rate from 15.3% to 15.8%.
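
    As an illustration of the early-exit idea, the sketch below (assuming PyTorch and arbitrary layer sizes; it is not the authors' trained model) attaches an output head to every hidden layer and stops forward propagation as soon as a head's softmax confidence reaches a chosen threshold.

        # Minimal sketch (assumed PyTorch, toy sizes): a cascade in which each hidden
        # layer has its own output head; propagation stops at the first head whose
        # confidence exceeds the threshold, so deeper layers are evaluated only when needed.
        import torch
        import torch.nn as nn

        class CascadeNet(nn.Module):
            def __init__(self, sizes=(784, 256, 128, 64), num_classes=10):
                super().__init__()
                self.blocks = nn.ModuleList(
                    nn.Sequential(nn.Linear(i, o), nn.ReLU())
                    for i, o in zip(sizes[:-1], sizes[1:])
                )
                self.heads = nn.ModuleList(nn.Linear(o, num_classes) for o in sizes[1:])

            def forward(self, x, threshold=0.9):
                # Expects a single sample (batch size 1); returns (predicted class, exit index).
                for depth, (block, head) in enumerate(zip(self.blocks, self.heads)):
                    x = block(x)
                    probs = torch.softmax(head(x), dim=-1)
                    conf, pred = probs.max(dim=-1)
                    if conf.item() >= threshold or depth == len(self.blocks) - 1:
                        return pred, depth

        net = CascadeNet()
        sample = torch.randn(1, 784)     # placeholder input, e.g. a flattened MNIST digit
        prediction, exit_used = net(sample, threshold=0.9)
        print(f"class {prediction.item()} predicted at exit {exit_used}")

    Raising the threshold pushes more samples to the deeper (remote) exits for higher accuracy; lowering it keeps more samples on the shallow (local) exits for speed.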

    Self management of a mobile thin client service

    Mobile thin client computing is an enabler for the execution of demanding applications on mobile handhelds. In thin client computing, the application is executed on remote servers and the mobile handheld only has to display the graphical updates and send user input to the remote execution environment. To guarantee a good user experience in a mobile environment, a Service Management Framework is required that prevents users from observing a lower Quality of Experience due to changes in the available network, server and client resources. The Service Management Framework therefore monitors the environment, and its Self Management component intervenes when necessary, e.g. by adapting the thin client protocol settings or moving a user session from one server to another. The design of the Self Management component is presented and its performance is evaluated.
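
    The following is a hypothetical Python sketch of the kind of monitor/decide/act loop such a Self Management component runs; the thresholds, Session fields and helper functions are invented placeholders, not the framework's published interface.

        # Hypothetical sketch of a self-management loop: observe network and server
        # conditions, then adapt protocol settings or migrate the session when needed.
        import time
        from dataclasses import dataclass

        LOW_BANDWIDTH_KBPS = 500   # assumed threshold below which protocol quality is lowered
        HIGH_SERVER_LOAD = 0.85    # assumed threshold above which a session is migrated

        @dataclass
        class Session:
            user: str
            server: str

        def measure_bandwidth_kbps(session):   # stub: a real monitor would probe the network
            return 350

        def measure_server_load(server):       # stub: a real monitor would query the server
            return 0.9

        def set_protocol_quality(session, level):
            print(f"{session.user}: thin client protocol quality set to {level}")

        def migrate_session(session, target="server-2"):
            print(f"{session.user}: session moved from {session.server} to {target}")
            session.server = target

        def manage(session, interval_s=5, iterations=1):
            """Periodically observe the environment and intervene when QoE is at risk."""
            for _ in range(iterations):
                if measure_bandwidth_kbps(session) < LOW_BANDWIDTH_KBPS:
                    set_protocol_quality(session, "low")   # e.g. stronger compression
                else:
                    set_protocol_quality(session, "high")
                if measure_server_load(session.server) > HIGH_SERVER_LOAD:
                    migrate_session(session)
                time.sleep(interval_s)

        manage(Session(user="alice", server="server-1"), interval_s=0)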

    Cloud-based desktop services for thin clients

    Cloud computing and ubiquitous network availability have renewed people's interest in the thin client concept. By executing applications in virtual desktops on cloud servers, users can access any application from any location with any device. For this to be a successful alternative to traditional offline applications, however, researchers must overcome important challenges: the thin client protocol must display audiovisual output fluidly, and the server executing the virtual desktop should have sufficient resources and ideally be close to the user's current location to limit network delay. From a service provider's viewpoint, cost reduction is also an important issue.

    Integrating personal media and digital TV with QoS guarantees using virtualized set-top boxes: architecture and performance measurements

    Nowadays, users consume a lot of functionality in their home that comes from a service provider located on the Internet. While the home network is typically shielded off as much as possible from the 'outside world', the supplied services could be greatly extended if it were possible to use local information. In this article, an extended service is presented that integrates the user's multimedia content, scattered over multiple devices in the home network, into the Electronic Program Guide (EPG) of the digital TV. We propose to virtualize the set-top box by migrating all functionality except user interfacing to the service provider infrastructure. The media in the home network is discovered through standard Universal Plug and Play (UPnP), whose QoS functionality is exploited to ensure high quality playback over the home network, which is essentially out of the control of the service provider. The performance of the subsystems is analysed.
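
    As a small illustration of the discovery step, the sketch below sends a standard SSDP M-SEARCH request to find UPnP media servers on the home network; it only prints the responses, whereas the described service would go on to browse the discovered content and use UPnP QoS for playback.

        # Minimal sketch of UPnP/SSDP discovery of media servers on the local network.
        import socket

        MSEARCH = "\r\n".join([
            "M-SEARCH * HTTP/1.1",
            "HOST: 239.255.255.250:1900",
            'MAN: "ssdp:discover"',
            "MX: 2",
            "ST: urn:schemas-upnp-org:device:MediaServer:1",
            "", "",
        ]).encode()

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.settimeout(3)
        sock.sendto(MSEARCH, ("239.255.255.250", 1900))   # SSDP multicast address and port

        try:
            while True:
                data, addr = sock.recvfrom(4096)
                print(f"media server at {addr[0]}:\n{data.decode(errors='replace')}")
        except socket.timeout:
            pass  # no more responses within the timeout
        finally:
            sock.close()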

    Middleware platform for distributed applications incorporating robots, sensors and the cloud

    Cyber-physical systems (CPSs) in the factory of the future will consist of cloud-hosted software governing an agile production process executed by autonomous mobile robots and controlled by analyzing the data from a vast number of sensors. CPSs thus operate on a distributed production floor infrastructure, and the set-up continuously changes with each new manufacturing task. In this paper, we present our OSGi-based middleware that abstracts the deployment of service-based CPS software components on the underlying distributed platform comprising robots, actuators, sensors and the cloud. Moreover, our middleware provides specific support for developing components based on artificial neural networks, a technique that has recently become very popular for sensor data analytics and robot actuation. We demonstrate a system where a robot takes actions based on the input from sensors in its vicinity.
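
    Purely as a conceptual sketch (plain Python rather than OSGi, with invented service names), the snippet below shows the sensor-to-neural-network-to-robot data flow that such components implement and that the middleware deploys across the distributed platform.

        # Conceptual sketch only: SensorService, InferenceService and RobotService are
        # hypothetical stand-ins for service components the middleware would deploy on
        # sensor nodes, the cloud and a robot; here they are simply composed in-process.
        import random

        class SensorService:
            def read(self):
                # Placeholder for a real sensor reading delivered over the middleware.
                return [random.random() for _ in range(4)]

        class InferenceService:
            def classify(self, reading):
                # Stand-in for a neural network component; here a trivial threshold rule.
                return "obstacle" if sum(reading) / len(reading) > 0.5 else "clear"

        class RobotService:
            def act(self, label):
                print("robot action:", "stop" if label == "obstacle" else "continue")

        sensor, model, robot = SensorService(), InferenceService(), RobotService()
        robot.act(model.classify(sensor.read()))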